Search results for "Kernel embedding of distributions"
Showing 10 of 18 documents
Reproducing kernel Hilbert spaces regression methods for genomic assisted prediction of quantitative traits
2008
Abstract Reproducing kernel Hilbert spaces regression procedures for prediction of total genetic value for quantitative traits, which make use of phenotypic and genomic data simultaneously, are discussed from a theoretical perspective. It is argued that a nonparametric treatment may be needed for capturing the multiple and complex interactions potentially arising in whole-genome models, i.e., those based on thousands of single-nucleotide polymorphism (SNP) markers. After a review of reproducing kernel Hilbert spaces regression, it is shown that the statistical specification admits a standard mixed-effects linear model representation, with smoothing parameters treated as variance components.…
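As a rough illustration of the RKHS regression idea summarized above, the sketch below fits a Gaussian-kernel ridge regression to synthetic SNP genotype data, with the regularization parameter playing the role of the ratio of residual to genetic variance components in the mixed-model view. The data, bandwidth heuristic, and variance ratio are assumptions for this example, not values from the paper.

```python
# Minimal sketch (not the paper's implementation): RKHS regression on SNP markers
# with a Gaussian kernel; lam plays the role of sigma_e^2 / sigma_g^2.
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 1000                                       # individuals, SNP markers (synthetic)
X = rng.integers(0, 3, size=(n, p)).astype(float)      # 0/1/2 genotype codes
y = X[:, :10] @ rng.normal(size=10) + rng.normal(scale=0.5, size=n)

def gaussian_kernel(A, B, bandwidth):
    d2 = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * bandwidth**2))

h = np.sqrt(p)                                         # heuristic bandwidth (assumption)
lam = 1.0                                              # residual-to-genetic variance ratio (assumption)
K = gaussian_kernel(X, X, h)

# Solve (K + lam * I) alpha = y; predicted total genetic values are K alpha.
alpha = np.linalg.solve(K + lam * np.eye(n), y)
g_hat = K @ alpha
print("training correlation:", np.corrcoef(g_hat, y)[0, 1])
```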
Optimized Kernel Entropy Components
2016
This work addresses two main issues of the standard Kernel Entropy Component Analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the importance of kernel eigenvectors by entropy instead of by variance as in Kernel Principal Components Analysis. In this work, we propose an extension of the KECA method, named Optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by means of compacting the information in very few features (often in just one or two). The proposed method produces features which have higher expressive power. In particular…
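A minimal sketch of the entropy ranking that underlies KECA (the OKECA optimization of the decomposition and of the Gaussian kernel parameter is not reproduced here): eigenpairs of the kernel matrix are ranked by their contribution to the quadratic Rényi entropy estimate V = (1/N^2) 1^T K 1 rather than by eigenvalue. The data and kernel width below are synthetic assumptions.

```python
# Rank kernel eigenvectors by entropy contribution instead of variance.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
N = X.shape[0]

def rbf_kernel(X, sigma):
    d2 = (X**2).sum(1)[:, None] + (X**2).sum(1)[None, :] - 2.0 * X @ X.T
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma**2))

K = rbf_kernel(X, sigma=1.0)
eigval, eigvec = np.linalg.eigh(K)                 # ascending eigenvalues
eigval, eigvec = eigval[::-1], eigvec[:, ::-1]     # reorder to descending

# Entropy contribution of eigenpair i: (sqrt(lambda_i) * e_i^T 1)^2 / N^2
contrib = (np.sqrt(np.clip(eigval, 0.0, None)) * (eigvec.T @ np.ones(N)))**2 / N**2
order = np.argsort(contrib)[::-1]                  # rank by entropy, not by variance

k = 2                                              # keep the top-k entropy components
idx = order[:k]
features = eigvec[:, idx] * np.sqrt(np.clip(eigval[idx], 0.0, None))
print("entropy captured by top-%d components: %.4f of %.4f"
      % (k, contrib[idx].sum(), contrib.sum()))
```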
Semisupervised nonlinear feature extraction for image classification
2012
Feature extraction is of paramount importance for an accurate classification of remote sensing images. Techniques based on data transformations are widely used in this context. However, linear feature extraction algorithms, such as the principal component analysis and partial least squares, can address this problem in a suboptimal way because the data relations are often nonlinear. Kernel methods may alleviate this problem only when the structure of the data manifold is properly captured. However, this is difficult to achieve when small-size training sets are available. In these cases, exploiting the information contained in unlabeled samples together with the available training data can si…
Kernel-Based Inference of Functions Over Graphs
2018
Abstract The study of networks has witnessed an explosive growth over the past decades with several ground-breaking methods introduced. A particularly interesting—and prevalent in several fields of study—problem is that of inferring a function defined over the nodes of a network. This work presents a versatile kernel-based framework for tackling this inference problem that naturally subsumes and generalizes the reconstruction approaches put forth recently by the graph signal processing community. Both the static and the dynamic settings are considered along with effective modeling approaches for addressing real-world problems. The analytical discussion herein is complement…
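A minimal sketch of the static setting, under the assumption of a regularized-Laplacian graph kernel K = (L + eps*I)^{-1}: a function observed on a subset of nodes is extended to the whole graph by kernel ridge regression. The random graph, noise level, and regularization constants are illustrative assumptions, not the chapter's experiments.

```python
# Infer a function over graph nodes with a graph kernel and kernel ridge regression.
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = (rng.random((n, n)) < 0.1).astype(float)
A = np.triu(A, 1)
A = A + A.T                                        # symmetric adjacency, no self-loops
L = np.diag(A.sum(1)) - A                          # combinatorial graph Laplacian
K = np.linalg.inv(L + 1e-2 * np.eye(n))            # regularized-Laplacian kernel (assumption)

f_true = K @ rng.normal(size=n)                    # a graph-smooth signal (synthetic)
obs = rng.choice(n, size=20, replace=False)        # nodes with observed values
y = f_true[obs] + 0.05 * rng.normal(size=obs.size)

lam = 1e-3
K_oo = K[np.ix_(obs, obs)]
alpha = np.linalg.solve(K_oo + lam * np.eye(obs.size), y)
f_hat = K[:, obs] @ alpha                          # estimate over all nodes

unobs = np.setdiff1d(np.arange(n), obs)
print("RMSE on unobserved nodes:",
      np.sqrt(np.mean((f_hat[unobs] - f_true[unobs])**2)))
```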
Model selection based product kernel learning for regression on graphs
2013
The choice of a suitable graph kernel is intrinsically hard and often cannot be made in an informed manner for a given dataset. Methods for multiple kernel learning offer a possible remedy, as they combine and weight kernels on the basis of a labeled training set of molecules to define a new kernel. Whereas most methods for multiple kernel learning focus on learning convex linear combinations of kernels, we propose to combine kernels in products, which theoretically enables higher expressiveness. In experiments on ten publicly available chemical QSAR datasets we show that product kernel learning is not significantly worse than any of the competing kernel methods on any dataset, and on average the…
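The sketch below illustrates the product combination itself (the Hadamard product of PSD kernel matrices is again PSD by the Schur product theorem), plugged into kernel ridge regression. The per-kernel weighting the paper actually learns and the graph kernels on molecules are not reproduced; RBF kernels on synthetic vectors stand in for them.

```python
# Combine base kernels by elementwise product and use the result in kernel ridge regression.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=100)

def rbf_kernel(X, sigma):
    d2 = (X**2).sum(1)[:, None] + (X**2).sum(1)[None, :] - 2.0 * X @ X.T
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma**2))

base_kernels = [rbf_kernel(X, s) for s in (1.0, 2.0, 4.0)]   # stand-ins for graph kernels
K = np.ones_like(base_kernels[0])
for Kb in base_kernels:
    K *= Kb                                 # Hadamard (elementwise) product, still PSD

alpha = np.linalg.solve(K + 0.1 * np.eye(len(y)), y)
print("fit correlation:", np.corrcoef(K @ alpha, y)[0, 1])
```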
A structural cluster kernel for learning on graphs
2012
In recent years, graph kernels have received considerable interest within the machine learning and data mining community. Here, we introduce a novel approach enabling kernel methods to utilize additional information hidden in the structural neighborhood of the graphs under consideration. Our novel structural cluster kernel (SCK) incorporates similarities induced by a structural clustering algorithm to improve state-of-the-art graph kernels. The approach taken is based on the idea that graph similarity can be described not only by the similarity between the graphs themselves, but also by the similarity they possess with respect to their structural neighborhood. We applied our novel kernel in…
Learning with the kernel signal to noise ratio
2012
This paper presents the application of the kernel signal-to-noise ratio (KSNR) in the context of feature extraction to general machine learning and signal processing domains. The proposed approach maximizes the signal variance while minimizing the estimated noise variance in a reproducing kernel Hilbert space (RKHS). The KSNR can be used in any kernel method to deal with correlated (possibly non-Gaussian) noise. We illustrate the method in nonlinear regression examples, dependence estimation and causal inference, nonlinear channel equalization, and nonlinear feature extraction from high-dimensional satellite images. Results show that the proposed KSNR yields more fitted solutions and extract…
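As an illustrative analog of the underlying signal-to-noise criterion in its linear form (the paper's RKHS formulation is not reproduced here), the sketch below finds projections maximizing the ratio of signal variance to noise variance via a generalized eigenvalue problem; the noise covariance is assumed known for simplicity, whereas in practice it would be estimated.

```python
# Linear SNR-maximizing projections via a generalized eigenvalue problem.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n, d = 500, 10
signal = rng.normal(size=(n, 3)) @ rng.normal(size=(3, d))           # low-rank signal (synthetic)
noise = rng.normal(size=(n, d)) @ np.diag(rng.uniform(0.5, 2.0, d))  # heteroscedastic noise
X = signal + noise

C_sig = np.cov(signal, rowvar=False)   # assumed known here; estimated in practice
C_noi = np.cov(noise, rowvar=False)

# Generalized eigenproblem C_sig w = lambda C_noi w; eigh returns ascending eigenvalues.
snr_vals, W = eigh(C_sig, C_noi)
W = W[:, ::-1]                         # columns ordered by decreasing SNR
Z = X @ W[:, :2]                       # top-2 SNR-maximizing features
print("top generalized eigenvalues (SNR):", snr_vals[::-1][:2])
```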
An Introduction to Kernel Methods
2009
Machine learning experienced great advances in the eighties and nineties due to active research in artificial neural networks and adaptive systems. These tools have demonstrated good results in many real applications, since neither a priori knowledge about the distribution of the available data nor assumptions about the relationships among the independent variables are required. Overfitting due to reduced training data sets is controlled by means of a regularized functional which minimizes the complexity of the machine. Working with high-dimensional input spaces is no longer a problem thanks to the use of kernel methods. Such methods also provide us with new ways to interpret the cl…
Weighted samples, kernel density estimators and convergence
2003
This note extends the standard kernel density estimator to the case of weighted samples in several ways. First, I consider the obvious extension of replacing the simple sum in the definition of the estimator with a weighted sum; I also consider other ways of introducing weights, based on adaptive kernel density estimators, treating the weights as indicators of the informational content of the observations and, in this sense, as signals of the local density of the data. All these ideas are illustrated using the Penn World Table in the context of the macroeconomic convergence issue.
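A minimal sketch of the first, "obvious" extension mentioned above: the simple sum in a Gaussian kernel density estimator is replaced by a weighted sum with weights normalized to one. The sample, weights, and bandwidth are synthetic assumptions, not the note's Penn World Table data.

```python
# Weighted kernel density estimator: f(x) = sum_i w_i * K_h(x - x_i).
import numpy as np

def weighted_kde(x_grid, samples, weights, bandwidth):
    """Gaussian weighted KDE with weights normalized to sum to one."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    u = (x_grid[:, None] - samples[None, :]) / bandwidth
    K = np.exp(-0.5 * u**2) / (np.sqrt(2.0 * np.pi) * bandwidth)
    return K @ w

rng = np.random.default_rng(0)
samples = rng.normal(size=200)
weights = rng.uniform(0.5, 1.5, size=200)      # e.g. informational content of each observation
grid = np.linspace(-4.0, 4.0, 161)
density = weighted_kde(grid, samples, weights, bandwidth=0.3)
print("integrates to ~1:", density.sum() * (grid[1] - grid[0]))
```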
Gamma Kernel Intensity Estimation in Temporal Point Processes
2011
In this article, we propose a nonparametric approach for estimating the intensity function of temporal point processes based on kernel estimators. In particular, we use asymmetric kernel estimators characterized by the gamma distribution, in order to describe features of observed point patterns adequately. Some characteristics of these estimators are analyzed and discussed both through simulated results and applications to real data from different seismic catalogs.
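A minimal sketch of intensity estimation with an asymmetric gamma kernel. A Chen-style gamma kernel is assumed here, since the abstract does not show the exact estimator, and the event times and bandwidth are synthetic rather than the paper's seismic catalogs.

```python
# Gamma-kernel intensity estimate for a temporal point process.
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(0)
event_times = np.cumsum(rng.exponential(scale=1.0, size=200))  # unit-rate point pattern (synthetic)
b = 5.0                                                        # bandwidth (assumption)

def gamma_intensity(t, events, b):
    """lambda_hat(t) = sum_i GammaPDF(event_i; shape = t/b + 1, scale = b)."""
    return gamma.pdf(events, a=t / b + 1.0, scale=b).sum()

grid = np.linspace(1e-6, event_times[-1], 200)
intensity = np.array([gamma_intensity(t, event_times, b) for t in grid])
print("mean estimated intensity:", intensity.mean())  # events were generated at rate ~1
```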